Context is vital for commonsense moral reasoning. "Lying to a friend" is wrong if it is meant to deceive them, but may be morally okay if it is intended to protect them. Such nuanced but salient contextual information can potentially flip the moral judgment of an action. Thus, we present ClarifyDelphi, an interactive system that elicits missing context for a moral situation by generating clarification questions such as "Why did you lie to your friend?". Our approach is inspired by the observation that questions whose potential answers lead to diverging moral judgments are the most informative. We learn to generate questions using reinforcement learning, by maximizing the divergence between the moral judgments of hypothetical answers to a question. Human evaluation shows that our system generates more relevant, informative, and defeasible questions compared to other question generation baselines. ClarifyDelphi assists informed moral reasoning by seeking additional morally consequential context to disambiguate social and moral situations.
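As a concrete illustration of the divergence objective, here is a minimal sketch that scores a clarification question by the Jensen-Shannon divergence between the judgment distributions of hypothetical answers. The `judge` function and the (good, neutral, bad) label set are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def question_reward(situation, answers, judge):
    """Score a question by the maximum pairwise divergence among the
    moral judgments of its hypothetical answers."""
    dists = [judge(f"{situation} {a}") for a in answers]
    return max(js_divergence(dists[i], dists[j])
               for i in range(len(dists))
               for j in range(i + 1, len(dists)))

# Toy judge: maps text to a (good, neutral, bad) distribution.
toy_judge = lambda text: [0.8, 0.1, 0.1] if "protect" in text else [0.1, 0.1, 0.8]
print(question_reward("I lied to my friend.",
                      ["To protect them.", "To deceive them."], toy_judge))
```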
Pre-trained language models, despite their rapid advancements powered by scale, still fall short of robust commonsense capabilities. And yet, scale appears to be the winning recipe; after all, the largest models seem to have acquired the largest amount of commonsense capabilities. Or is it? In this paper, we investigate the possibility of a seemingly impossible match: can smaller language models with dismal commonsense capabilities (i.e., GPT-2) ever win over models that are orders of magnitude larger and better (i.e., GPT-3), if the smaller models are powered with novel commonsense distillation algorithms? The key intellectual question we ask here is whether it is possible, at all, to design a learning algorithm that does not benefit from scale, yet leads to a competitive level of commonsense acquisition. In this work, we study generative models of commonsense knowledge, focusing on the task of generating generics, statements of commonsense facts about everyday concepts, e.g., birds can fly. We introduce a novel commonsense distillation framework, I2D2, that loosely follows the Symbolic Knowledge Distillation of West et al. but breaks the dependence on extreme-scale teacher models through two innovations: (1) a novel adaptation of NeuroLogic Decoding to enhance the generation quality of weak, off-the-shelf language models, and (2) self-imitation learning, in which the model iteratively learns from its own enhanced commonsense generations. Empirical results suggest that scale is not the only way, as novel algorithms can be a promising alternative. Moreover, our study leads to a new corpus of generics, Gen-A-Tomic, that is the largest and of the highest quality available to date.
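To make the self-imitation idea concrete, here is a minimal, hedged sketch of the loop: generate candidate generics, keep those a critic accepts, and fine-tune the generator on its own accepted outputs. The toy generator and critic below are placeholders standing in for a constrained-decoding GPT-2 and a trained quality filter, not the paper's components.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ToyGenerator:
    """Stand-in for a small LM (e.g., GPT-2) with constrained decoding."""
    learned: List[str] = field(default_factory=list)

    def generate(self, concept: str) -> List[str]:
        return [f"{concept}s can fly", f"{concept}s have wings"]

    def finetune(self, examples: List[str]) -> "ToyGenerator":
        return ToyGenerator(learned=self.learned + examples)

def self_imitation(generator, critic: Callable[[str], float],
                   concepts: List[str], rounds: int = 2,
                   threshold: float = 0.5):
    """Generate candidates, keep those the critic accepts, then fine-tune
    the generator on its own accepted outputs; repeat."""
    corpus: List[str] = []
    for _ in range(rounds):
        candidates = [s for c in concepts for s in generator.generate(c)]
        accepted = [s for s in candidates if critic(s) >= threshold]
        corpus.extend(accepted)
        generator = generator.finetune(accepted)  # self-imitation step
    return generator, corpus

gen, generics = self_imitation(ToyGenerator(),
                               critic=lambda s: 0.9 if "fly" in s else 0.1,
                               concepts=["bird"])
```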
We challenge AI models to "demonstrate understanding" of the sophisticated multimodal humor of The New Yorker Caption Contest. Specifically, we develop three carefully circumscribed tasks that center on the potentially complex and unexpected relationships between image and caption, and on similarly complex and unexpected allusions to the wide varieties of human experience; these are the hallmarks of a New Yorker-caliber cartoon. We investigate both vision-and-language models that take the cartoon pixels and captions directly as input, and language-only models for which we circumvent image processing by providing textual descriptions of the image. Even when we provide rich multifaceted annotations for the cartoon images, we identify performance gaps between high-quality machine learning models (e.g., a fine-tuned 175B-parameter language model) and humans. We publicly release our corpora, including annotations describing the image's locations/entities, what is unusual about the scene, and explanations of the jokes.
Humans have a remarkable capacity for abductive reasoning, hypothesizing about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we almost cannot help but draw probable inferences grounded in our everyday experience and knowledge of the world. For example, if we see a "20 mph" sign alongside a road, we may assume the street is in a residential area (rather than on a highway), even if no houses are pictured. Can machines perform similar visual reasoning? We present Sherlock, an annotated corpus of 103K images for testing machines' capacity for abductive reasoning beyond literal image content. We adopt a free-viewing paradigm: participants first observe and identify salient clues within an image (e.g., objects, actions) and then, given a clue, provide a plausible inference about the scene. In total, we collect 363K (clue, inference) pairs, which form a first-of-its-kind abductive visual reasoning dataset. Using our corpus, we test three complementary axes of abductive reasoning, evaluating models' capacity to: i) retrieve relevant inferences from a large candidate corpus; ii) localize evidence for an inference via bounding boxes; and iii) compare plausible inferences to match human judgments on a newly collected diagnostic corpus of 19K Likert-scale ratings. While we find that a CLIP RN50x64 model fine-tuned with a multitask objective outperforms strong baselines, significant headroom remains between model performance and human agreement. Data, models, and the leaderboard are available at http://visualabduction.com/
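As a rough sketch of the retrieval axis, the snippet below ranks candidate inferences against an image using an off-the-shelf CLIP checkpoint via the Hugging Face API; the paper's fine-tuned RN50x64 model and its multitask objective are not reproduced here.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_inferences(image: Image.Image, candidates: list[str]) -> list[tuple[str, float]]:
    """Rank textual inferences by CLIP image-text similarity."""
    inputs = processor(text=candidates, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image[0]  # (num_candidates,)
    scores = logits.softmax(dim=-1).tolist()
    return sorted(zip(candidates, scores), key=lambda x: -x[1])

# Usage: rank_inferences(Image.open("road.jpg"),
#                        ["this is a residential area", "this is a highway"])
```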
As AI systems become increasingly powerful and pervasive, concerns about machines' morality, or lack thereof, are growing. Yet teaching morality to machines is a formidable task: morality remains one of the most intensely debated questions among humans, let alone for AI. Still, existing AI systems deployed to millions of users are already making decisions loaded with moral implications, which poses a seemingly impossible challenge: teaching machines moral sense while humanity continues to grapple with it. To explore this challenge, we introduce Delphi, an experimental framework based on deep neural networks trained directly on descriptive moral judgments, e.g., "helping a friend" is generally good, while "helping a friend spread fake news" is not. Empirical results offer new insights into the promises and limits of machine ethics. Delphi demonstrates strong generalization in the face of novel moral situations, whereas off-the-shelf neural network models exhibit markedly poorer judgment, including unjust biases, confirming the need to explicitly teach machines moral sense. Yet Delphi is not perfect, exhibiting susceptibility to pervasive biases and inconsistencies. Nevertheless, we demonstrate positive use cases for an imperfect Delphi, including its use as a component model within other imperfect AI systems. Importantly, we interpret Delphi's operationalization in light of prominent ethical theories, which leads us to important future research questions.
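The interface Delphi exposes is simple: a free-text situation in, a descriptive judgment out. The sketch below shows that shape with a generic seq2seq pipeline; the checkpoint path is hypothetical, since the trained weights are not assumed to be publicly available.

```python
from transformers import pipeline

# Hypothetical checkpoint: any seq2seq model fine-tuned on
# (situation, descriptive judgment) pairs would fit this shape.
judge = pipeline("text2text-generation", model="path/to/delphi-like-model")

for situation in ["helping a friend", "helping a friend spread fake news"]:
    print(situation, "->", judge(situation)[0]["generated_text"])
```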
The common practice for training commonsense models has gone from-human-to-corpus-to-machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from-machine-to-corpus-to-machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge Distillation (Hinton et al., 2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically, as text, in addition to the neural model. We also distill only one aspect, the commonsense of a general language model teacher, allowing the student to be a different type of model, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant in all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model's commonsense capabilities despite being 100x smaller. We apply this to the ATOMIC resource, and share our new symbolic knowledge graph and commonsense models.
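A minimal sketch of the generate-then-filter recipe follows: prompt a large teacher model with few-shot examples, then keep only the generations a separately trained critic scores highly. `teacher_complete` and `critic_score` are placeholders, not the paper's exact components.

```python
FEW_SHOT_PROMPT = """\
Event: PersonX wants to relax. Effect: PersonX takes a nap.
Event: PersonX spills coffee. Effect: PersonX cleans the desk.
Event: {event} Effect:"""

def distill(events, teacher_complete, critic_score, threshold=0.8):
    """Build a symbolic knowledge graph as (event, effect) text pairs,
    keeping only generations the critic accepts."""
    graph = []
    for event in events:
        effect = teacher_complete(FEW_SHOT_PROMPT.format(event=event))
        if critic_score(event, effect) >= threshold:  # filter low quality
            graph.append((event, effect.strip()))
    return graph  # a small "student" commonsense model trains on this corpus

# Toy usage with stand-in teacher and critic:
print(distill(["PersonX forgets an umbrella."],
              teacher_complete=lambda prompt: " PersonX gets wet.",
              critic_score=lambda e, f: 0.9))
```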
Recent years have brought a renewed interest in commonsense representation and reasoning within the field of natural language understanding. The development of new commonsense knowledge graphs (CSKGs) has been central to these advances, as their diverse facts can be used by machine learning models to tackle new and challenging tasks. At the same time, questions remain about the quality and coverage of these resources, given the massive scale required to comprehensively cover general commonsense knowledge. In this work, we posit that manually constructed CSKGs will never achieve the coverage required to be applicable in all situations encountered by NLP agents. We therefore propose a new evaluation framework that tests the utility of KGs based on how effectively implicit knowledge representations can be learned from them. With this new goal, we propose ATOMIC 2020, a new CSKG of general-purpose commonsense knowledge containing knowledge that is not readily available in pretrained language models. We evaluate its properties in comparison with other leading CSKGs, performing the first large-scale pairwise study of commonsense knowledge resources. Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events. Finally, through human evaluation, we show that the few-shot performance of GPT-3 (175B parameters), while impressive, remains below that of a BART-based knowledge model trained on ATOMIC 2020, despite the latter using over 430x fewer parameters.
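To illustrate what "training knowledge models" means in practice, the sketch below queries a COMET-style BART model for the tail of a (head, relation, tail) triple. The checkpoint path and the `[GEN]` input format are assumptions based on common practice for such models, not guaranteed specifics of the released artifacts.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Placeholder path: the trained models are distributed by the authors and
# are not assumed to live at any particular Hub location.
tokenizer = BartTokenizer.from_pretrained("path/to/comet-atomic-2020-bart")
model = BartForConditionalGeneration.from_pretrained("path/to/comet-atomic-2020-bart")

def query(head: str, relation: str, num_beams: int = 5) -> str:
    """Generate the tail of a (head, relation, tail) commonsense triple."""
    inputs = tokenizer(f"{head} {relation} [GEN]", return_tensors="pt")
    out = model.generate(**inputs, num_beams=num_beams, max_new_tokens=24)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(query("PersonX buys an umbrella", "xIntent"))  # e.g. "to stay dry"
```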
This document offers a detailed linguistic description of SNACS (Semantic Network of Adposition and Case Supersenses; Schneider et al., 2018), an inventory of 52 semantic labels ("supersenses") that characterize the use of adpositions and case markers at a somewhat coarse level of granularity, as demonstrated in the STREUSLE corpus (https://github.com/nert-nlp/streusle/; version 4.5 tracks guidelines version 2.6). Though the SNACS inventory aspires to be universal, this document is specific to English; documentation for other languages will be published separately. Version 2 is a revision of the supersense inventory proposed for English by Schneider et al. (2015, 2016) (hereafter "v1"), which in turn was based on previous schemes. The present inventory was developed after an extensive review of the v1 corpus annotations for English, along with previously unanalyzed genitive case possessives (Blodgett and Schneider, 2018), as well as consideration of adposition and case phenomena in Hebrew, Hindi, Korean, and German. Hwang et al. (2017) present the theoretical foundations of the v2 scheme. Schneider et al. (2018) summarize the scheme, its application to English corpus data, and an automatic disambiguation task. Liu et al. (2021) offer an English lexical semantic recognition tagger whose output includes SNACS labels. This document can also be browsed alongside the corpus data on the Xposition website (Gessler et al., 2022): http://www.xposition.org/
Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which posits that smooth (non-binary) subnetworks exist within a dense network and achieve performance competitive with that dense network, we propose a few-shot class-incremental learning (FSCIL) method referred to as Soft-SubNetworks (SoftNet). Our objective is to learn a sequence of sessions incrementally, where each session includes only a few training instances per class, while preserving previously learned knowledge. SoftNet jointly learns the model weights and adaptive non-binary soft masks in a base training session, where each mask consists of a major and a minor subnetwork; the former aims to minimize catastrophic forgetting during training, while the latter aims to avoid overfitting to the few samples in each new training session. We provide comprehensive empirical validation showing that SoftNet effectively tackles the few-shot incremental learning problem, surpassing the performance of state-of-the-art baselines on benchmark datasets.
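A minimal sketch of the soft-mask mechanism follows, assuming a sigmoid-parameterized non-binary mask per weight and a quantile split into major/minor parts; the exact parameterization and split rule in the paper may differ.

```python
import torch
import torch.nn as nn

class SoftMaskedLinear(nn.Module):
    """Linear layer whose weights are modulated by a learnable soft mask."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.mask_logits = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.mask_logits)  # smooth, non-binary mask in (0, 1)
        return x @ (self.weight * mask).t()

    def major_minor(self, keep: float = 0.9):
        """Split weights into a major (top `keep` fraction of mask values)
        and a minor (remaining) subnetwork."""
        mask = torch.sigmoid(self.mask_logits)
        cutoff = torch.quantile(mask.flatten(), 1.0 - keep)
        return mask >= cutoff, mask < cutoff

layer = SoftMaskedLinear(4, 3)
out = layer(torch.randn(2, 4))          # forward pass through masked weights
major, minor = layer.major_minor(0.9)   # boolean masks for the two subnetworks
```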
Although electronic health records (EHRs) are a rich data source for biomedical research, these systems are not implemented uniformly across healthcare settings, and large amounts of data may be missing due to healthcare fragmentation and the lack of interoperability between siloed EHRs. Considering that deleting cases with missing data may introduce severe bias into subsequent analyses, several authors prefer to adopt a multiple imputation (MI) strategy to recover the missing information. Unfortunately, although several works in the literature have reported promising results using any of the various MI algorithms now freely available for research, there is no consensus on which MI algorithm works best. Beyond the choice of MI strategy, the selection of the imputation algorithm and its application settings is likewise crucial and challenging. In this paper, inspired by the seminal works of Rubin and van Buuren, we propose a methodological framework that can be applied to evaluate and compare multiple imputation techniques, with the aim of choosing the most valid one for computing inferences in a clinical research setting. Our framework has been applied to validate, and extend to a larger cohort, the results we presented in a previous study, in which we evaluated the influence of key patient descriptors and COVID-19 severity in patients with type 2 diabetes mellitus, whose data were provided by the National COVID Cohort Collaborative Enclave.
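In the same spirit as this evaluation framework, the sketch below compares common imputation strategies by artificially masking entries and measuring reconstruction error; RMSE on held-out entries is a standard proxy, not necessarily the paper's exact criterion.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, KNNImputer, SimpleImputer

def compare_imputers(X: np.ndarray, mask_frac: float = 0.1, seed: int = 0):
    """RMSE of each imputer on artificially masked-out entries of X."""
    rng = np.random.default_rng(seed)
    holdout = rng.random(X.shape) < mask_frac   # entries to hide, then score
    X_missing = X.copy()
    X_missing[holdout] = np.nan
    imputers = {
        "mean": SimpleImputer(strategy="mean"),
        "knn": KNNImputer(n_neighbors=5),
        "iterative (MICE-like)": IterativeImputer(random_state=seed),
    }
    return {
        name: float(np.sqrt(np.mean(
            (imp.fit_transform(X_missing)[holdout] - X[holdout]) ** 2)))
        for name, imp in imputers.items()
    }

X = np.random.default_rng(1).normal(size=(200, 5))
X[:, 1] += 0.5 * X[:, 0]   # correlated columns give model-based MI an edge
print(compare_imputers(X))
```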